- Path: news.delcoelect.com!c23jwd
- From: c23jwd@kocrsv01.delcoelect.com (Jeffrey William Davis)
- Newsgroups: comp.sys.amiga,comp.sys.amiga.hardware,comp.sys.amiga.programmer,no.amiga
- Subject: Re: FWD: Fate of 68080
- Date: 24 Jan 1996 20:24:00 GMT
- Organization: Delco Electronics Corp.
- Distribution: usa
- Message-ID: <4e64h0$q49@kocrsv08.delcoelect.com>
- References: <4d3c27$n6c@rs18.hrz.th-darmstadt.de> <1141.6593T1100T790@norconnect.no>
- NNTP-Posting-Host: koptsw19.delcoelect.com
-
- In article <1141.6593T1100T790@norconnect.no>,
- Kenneth C. Nilsen <kenneth@norconnect.no> wrote:
- >>The current/near future top-of-the-line is the PPC 604@150 MHz.
- >>The 300 MHz was foreseen for the 64-bit PPC 620, but the situation about this
- >>beast is unclear since the projected performance gain is not enough these
- >>days. Instead a design rework will be done, maybe they call it 630 or so.
- >>Anyway the race for MHz sounds silly to me, since the memory can't keep pace
- >>with it, except for large (and expensive) caches. 700 MHz is science fiction
- >>I'd say.
- >
- >Yeah, you got a point. But wouldn't it be possible to write data in parallel
- >to memory, I mean, using double memory bus writes so that in principle you can
- >write not 1 byte, but 2 bytes simultaneously or even 4 bytes etc. (hope you
- >get the picture)?
-
- This 'principle' which you loosely describe is already commonly done:
- 64 bits, 128 bits (16 bytes), or more are written and read simultaneously.
- Writing data has never really been the problem; it's READING it that
- causes a bottleneck that is tough to get around.
-
- When writing, the data can be handed to a cache that will write it whenever
- it has the time. If a read is attempted on that data while it is still
- in the cache, you simply deliver (read) the cache value. Granted, there
- are numerous ways to get data into the cache (including reads) but that's
- not important here.
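The write-buffering scheme above can be sketched like this (class and names are my own, just to illustrate the idea):

```python
# Toy write buffer: writes are queued and drained to slow memory
# 'whenever it has the time'; a read first checks the buffer, so the
# CPU never waits on a pending write.
class WriteBuffer:
    def __init__(self, memory):
        self.memory = memory   # backing 'slow' RAM: dict of addr -> value
        self.pending = {}      # posted writes not yet drained to RAM

    def write(self, addr, value):
        self.pending[addr] = value  # returns immediately; no stall

    def read(self, addr):
        # Deliver the buffered copy if it hasn't reached RAM yet.
        if addr in self.pending:
            return self.pending[addr]
        return self.memory.get(addr, 0)

    def drain(self):
        # Slow memory catches up in the background.
        self.memory.update(self.pending)
        self.pending.clear()

ram = {}
wb = WriteBuffer(ram)
wb.write(0x100, 42)
assert wb.read(0x100) == 42  # served from the buffer; RAM not yet updated
wb.drain()
assert ram[0x100] == 42
```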
-
- When reading, the processor expects the data NOW. There is a limited
- amount of time between when the processor is able to give an address and
- when it expects the data to be ready. If you take longer than this to
- retrieve the data, the processor has to wait; hence, wait-states.
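The wait-state arithmetic works out like this (the clock rate, window, and DRAM latency below are illustrative figures I picked, not from the post):

```python
# Toy wait-state calculation: if memory takes longer than the CPU's
# access window, the CPU idles for the difference, rounded up to
# whole clock cycles.
import math

def wait_states(cpu_clock_mhz, access_window_cycles, ram_latency_ns):
    cycle_ns = 1000.0 / cpu_clock_mhz
    window_ns = access_window_cycles * cycle_ns
    if ram_latency_ns <= window_ns:
        return 0  # RAM answers in time; no waiting
    return math.ceil((ram_latency_ns - window_ns) / cycle_ns)

# e.g. a 100 MHz CPU with a 2-cycle window in front of 70 ns DRAM:
print(wait_states(100, 2, 70), "wait states")
```

With a 10 ns cycle and a 20 ns window, 70 ns DRAM forces 5 idle cycles per access; crank the clock and the count only grows.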
-
- With RAM technology rapidly falling behind processor speed, we need to
- somehow 'predict' which piece of data the processor will want and (at
- the very least) begin reading it before the processor even asks for it,
- then place it in a device (a cache) which can deliver it to the processor
- more quickly. This is where the 'expensive' cache comes in.
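A minimal sketch of that prediction, assuming the simplest policy (fetch the next sequential line ahead of time; the class is my own illustration, not any real controller):

```python
# Toy sequential prefetcher: after each read, start fetching the next
# line so it is (hopefully) already in the fast cache before the CPU
# asks for it.
class PrefetchCache:
    def __init__(self):
        self.cache = set()   # lines currently held in fast SRAM
        self.hits = 0
        self.misses = 0

    def read(self, line):
        if line in self.cache:
            self.hits += 1       # delivered at cache speed
        else:
            self.misses += 1     # CPU stalls on slow DRAM
            self.cache.add(line)
        self.cache.add(line + 1) # predict: the next line comes soon

pc = PrefetchCache()
for line in range(8):            # a sequential scan through memory
    pc.read(line)
print(pc.hits, "hits,", pc.misses, "miss")  # only the first access misses
```

Sequential scans are the easy case; the prediction falls apart on scattered access patterns, which is why caches alone can't close the gap.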
-
- CPUs having asynchronous address and data busses make it easier to
- manage the high MHz, since it takes less time to latch an address than
- it does to complete a R/W cycle on the data bus. This allows you to
- grab multiple addresses and begin working on them simultaneously while
- the data busses are saturated, continually trying to keep the data bus
- from ever having to wait.
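The overlap can be counted out with a toy example (the 1-cycle address phase and 3-cycle data phase are numbers I made up to show the shape of it):

```python
# Toy comparison: latching an address is quick (say 1 cycle) while the
# data phase is slow (say 3 cycles). Decoupled busses overlap the next
# address with the current data phase, keeping the data bus saturated.
ADDR_CYCLES, DATA_CYCLES, N = 1, 3, 8  # 8 back-to-back accesses

coupled = N * (ADDR_CYCLES + DATA_CYCLES)  # address, then data, repeat
pipelined = ADDR_CYCLES + N * DATA_CYCLES  # addresses hide under data

print(coupled, "cycles coupled vs", pipelined, "cycles pipelined")
```

For 8 accesses that's 32 cycles versus 25: all but the first address latch disappears under an ongoing data phase.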
-
- Unfortunately, the only perfect prediction would be one that could
- deliver any piece of memory on demand as fast as the processor can
- accept it - hence the < 1ns DRAM @ 700MHz! Not likely to happen in
- the near future. The further CPU clock speeds fly past current RAM
- technology, the more ridiculous it becomes.
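For the record, the arithmetic behind that quip (mid-90s DRAM access times of roughly 60-70 ns are my ballpark, not the post's):

```python
# At 700 MHz a single clock is about 1.43 ns, so zero-wait-state
# memory would need an access time at or below that (under 1 ns once
# you leave margin for address setup) -- orders of magnitude faster
# than the ~60-70 ns DRAM of the day.
def cycle_time_ns(mhz):
    return 1000.0 / mhz

print(f"{cycle_time_ns(700):.2f} ns per cycle at 700 MHz")
```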
-
- --
- =======================================================================
- Jeffrey W. Davis (317)451-0503 Domain: c23jwd@eng.delcoelect.com
- Software Engineer UUCP: deaes!c23jwd
- Delco Electronics Corporation GM: 8-322-0503 Mail: CT40A
-